The Past: Individual Differences in Cognitive Control
The path I took - Graduate School
Master's and Ph.D. in applied cognitive psychology
- Claremont Graduate University
- Andrew Conway
- Working Memory Span tasks
- Intelligence
The path I took - Early career
Master's
- Thesis: The Effect of Working Memory Training on Cognitive Control and Reading Comprehension
- Modeling Working Memory
Ph.D.
- Simulating intelligence data for POT (Process Overlap Theory)
- Psychometrics
- Dissertation: A Hierarchical Bayesian Approach to Estimating Individual Differences Reliability in Classic Cognitive Control Tasks
Summary of my dissertation
“The basic ability in cognitive control is maintaining a goal, and its goal-relevant information, in the face of distraction”
Also called:
- attentional control
- executive functioning
- executive attention
Friedman and Miyake: Unity and Diversity Model
- Response Inhibition
- Working Memory Updating
- Task-Set Shifting
Cognitive control tasks
Stroop:
- Color (ink) naming task
- congruent (ink color matches the word)
- incongruent (ink color mismatches the word)
Assumed underlying mechanism:
- prepotent response inhibition
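As a toy illustration (hypothetical numbers, not data from the talk), the Stroop effect is simply a within-subject difference of condition means:

```python
import numpy as np

# Hypothetical RTs in ms for one subject (illustration only)
congruent = np.array([520, 480, 510, 495, 505])
incongruent = np.array([610, 650, 590, 640, 620])

# The classic Stroop effect: incongruent minus congruent mean RT
stroop_effect = incongruent.mean() - congruent.mean()
print(f"Stroop effect: {stroop_effect:.0f} ms")  # 120 ms here
```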
Experimental effect
Cognitive control effects:
- Reliable
- Robust
- “Everybody Stroops” - Jeff Rouder
“Every healthy person Stroops”
- Used to measure control deficits in:
- A.D.D.
- Schizophrenia
- Frontal lobe damage
Individual Differences
Stroop and other cognitive control tasks show inconsistent validity
Why?
Low between-subject variance
Experimental tasks are designed to
- maximize within-subject variance
- e.g., the difference between congruent & incongruent conditions
- minimize between-subject variance
- which gets treated as measurement noise
Additional reasons
Furthermore, researchers have proposed:
- reliability (Hedge et al., 2018)
- reliability is a bottleneck for correlations
- difference scores (von Bastian et al., 2020)
- notoriously problematic for psychometrics (see the formula after this list)
- example with Stroop
- aggregate statistics (Rouder & Haaf, 2019)
Or: not a psychometric construct at all! (Rey-Mermet et al., 2018)
- task-specific variance, not a single construct: cognitive control
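The standard psychometric result behind the difference-score worry (classical test theory, not anything specific to these papers): for a difference score \(D = X - Y\),

\[
\rho_{DD'} = \frac{\sigma_X^2\,\rho_{XX'} + \sigma_Y^2\,\rho_{YY'} - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y}{\sigma_X^2 + \sigma_Y^2 - 2\,\rho_{XY}\,\sigma_X\,\sigma_Y}
\]

In the Stroop case, incongruent (\(X\)) and congruent (\(Y\)) RTs correlate highly across subjects because both mostly reflect general speed, and a large \(\rho_{XY}\) shrinks the numerator faster than the denominator: the difference score can be unreliable even when each condition is measured reliably.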
My dissertation
- Difference scores (von Bastian et al., 2020)
- Aggregate statistics (Rouder & Haaf, 2019)
- Reliability (Hedge et al., 2018)
Does poor validity stem from poor reliability?
Plus: theory-driven task manipulations from the Dual Mechanisms of Cognitive Control framework
My dissertation
Dual Mechanisms of Cognitive Control (Braver, 2007; 2012)
Proactive control refers to a sustained and anticipatory mode of control that allows individuals to actively and optimally configure processing resources before the onset of task demands
Reactive control involves a ‘wait-and-see’ mode of control that is triggered by stimuli and relies upon the retrieval of task goals and the rapid mobilization of processing resources after the onset of a cognitively demanding event
My dissertation
4 Tasks:
- Stroop, AX-CPT, Task-Switching, & Sternberg WM
- Baseline, Proactive, Reactive
- Test + Retest: 6 weeks
- ~ 130 subjects
Analysis first phase:
Classical Frequentist Approach
Analysis second phase:
Hierarchical Bayesian Modeling
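As a rough sketch of what the second phase looks like (a minimal PyMC model on simulated data; the dissertation's actual models were more elaborate, and all names here are illustrative): each subject's Stroop effect is drawn from a population distribution, so noisy per-subject estimates get shrunk toward the group mean.

```python
import numpy as np
import pymc as pm

# Simulated trial-level Stroop data (illustration only)
rng = np.random.default_rng(0)
n_subj, n_trials = 20, 40
subj = np.repeat(np.arange(n_subj), n_trials)             # subject index per trial
cond = np.tile(np.repeat([0, 1], n_trials // 2), n_subj)  # 0 = congruent, 1 = incongruent
true_beta = rng.normal(100, 30, n_subj)                   # each subject's true Stroop effect
rt = rng.normal(600 + true_beta[subj] * cond, 80)         # trial RTs in ms

with pm.Model():
    # Population-level parameters
    mu_alpha = pm.Normal("mu_alpha", 600, 100)       # group-mean baseline RT
    mu_beta = pm.Normal("mu_beta", 100, 50)          # group-mean Stroop effect
    sigma_alpha = pm.HalfNormal("sigma_alpha", 100)  # between-subject SD of baselines
    sigma_beta = pm.HalfNormal("sigma_beta", 50)     # between-subject SD of effects

    # Subject-level parameters: partial pooling shrinks each
    # subject's estimate toward the group mean
    alpha = pm.Normal("alpha", mu_alpha, sigma_alpha, shape=n_subj)
    beta = pm.Normal("beta", mu_beta, sigma_beta, shape=n_subj)

    # Trial-level likelihood
    sigma = pm.HalfNormal("sigma", 100)
    pm.Normal("rt_obs", alpha[subj] + beta[subj] * cond, sigma, observed=rt)

    idata = pm.sample()
```

The partial pooling over `beta` is the key move for reliability: subject effects borrow strength from the group instead of being estimated as raw per-subject difference scores.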
Frequentist Reliability
Frequentist Validity
Bayesian Reliability
Reliability Bottlenecks Validity?
If reliability indeed bottlenecks the between-task correlations,
and we improved reliability with the HB model,
then what do we expect for the convergent validity (between-task correlations)?
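The logic here is the classical correction for attenuation: with reliabilities \(\rho_{XX'}\) and \(\rho_{YY'}\), the observed correlation between two measures is bounded by

\[
r^{\text{obs}}_{XY} \approx r^{\text{true}}_{XY}\,\sqrt{\rho_{XX'}\,\rho_{YY'}}
\]

so if the HB model raises reliability and a substantial true correlation exists, the observed between-task correlations should rise; if they stay flat, reliability was not the bottleneck.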
Bayesian Phase Validity
Wrap up
Figure: meta-analysis (von Bastian et al., 2020)
Wrap up
- Methodological exhaustion?
- Hierarchical Bayesian models
- Drift diffusion models
- SEM/FA
- Previously unseen combinations of methods.
“Neither measurement error nor speed-accuracy trade-offs explain the difficulty of establishing attentional control as a psychometric construct: Evidence from a latent-variable analysis using diffusion modeling”
An old idea
- Not a psychometric construct at all! (Rey-Mermet et al., 2018)
- task-specific variance, not a single construct: cognitive control
- Still curious what else could be done
An old idea
- An old project, 2nd semester masters
Figure: RTs and WMC (McVay & Kane, 2011)
An old idea
Figure: RTs and WMC (McVay & Kane, 2011)
An old idea
Figure: RTs and WMC (McVay & Kane, 2011)
What are some analyses that you use?
- mean (or sum, in the case of surveys)
- location/scale (e.g., \(\mathcal{N}(\mu, \sigma^2)\))
Noise around the mean instead of the mean?
Intra-Individual Variability (IIV)
Variability is shaped by randomly occurring, inordinately slow trials (Aristodemou et al., 2022; Kofler et al., 2013; Geurts et al., 2008)
Intra-Individual Variability
Figure: Snijder et al., 2021
Intra-Individual Variability
Figure: Snijder et al., 2021
Measuring IIV
- \(\sigma^2\) alone is crude
- correlation between RT Mean and RT SD (Wagenmakers & Brown, 2007)
- Coefficient of Variation (CV)
- \(\text{CV} = \frac{\sigma_i}{\bar{y}_i}\) (SD/Mean)
- Intra-individual Standard Deviation (ISD)
- \(\text{ISD} = \sqrt{\frac{\sum_{t=1}^{T_i} (y_{i,t} - \bar{y}_i)^2}{T_i}}\)
- Ex-Gaussian \(\tau\) parameter
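A minimal sketch of computing all three on one subject's RT series (numpy/scipy; note that scipy's ex-Gaussian is `exponnorm`, parameterized by the shape \(K = \tau/\sigma\), so \(\tau = K \cdot \text{scale}\)):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# One subject's RTs in ms: a Gaussian bulk plus occasional very slow trials
rt = rng.normal(500, 40, 200) + rng.exponential(80, 200)

cv = rt.std() / rt.mean()  # Coefficient of Variation (SD/Mean)
isd = rt.std()             # ISD: SD of the series (T_i in the denominator)

# Ex-Gaussian fit: Normal + Exponential; tau is the slow-tail parameter
K, loc, scale = stats.exponnorm.fit(rt)
tau = K * scale

print(f"CV = {cv:.3f}, ISD = {isd:.1f} ms, tau = {tau:.1f} ms")
```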
Measuring IIV: trend?
- Variability is shaped by randomly occurring, inordinately slow trials
- Wang et al., 2012:
- systematic (non-random) variability: trends (i.e., learning/improvements)
- CV and ISD:
- a subject who steadily improves (or declines) can be mistaken for a highly variable one
- Ex-Gaussian \(\tau\):
- also sensitive to trend effects
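A quick simulated demonstration of the trend problem (illustration only): a subject who merely speeds up across the session shows an inflated ISD unless the series is detrended first.

```python
import numpy as np

rng = np.random.default_rng(7)
trials = np.arange(200)

# A subject who steadily improves: RT drops ~100 ms over the session,
# with a modest 20 ms trial-to-trial SD
rt = 600 - 0.5 * trials + rng.normal(0, 20, 200)

isd_raw = rt.std()  # inflated: the learning trend is counted as "variability"

# Fit and remove a linear trend, then recompute
slope, intercept = np.polyfit(trials, rt, 1)
residuals = rt - (intercept + slope * trials)
isd_detrended = residuals.std()  # close to the true 20 ms

print(f"raw ISD = {isd_raw:.1f} ms, detrended ISD = {isd_detrended:.1f} ms")
```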
???
A novel way of measuring IIV?
a plane? a bird? no, it’s …
???
A novel way of measuring IIV?
a plane? a bird? no, it’s DSEM
Short pause
- Next: an introduction to DSEM (15 minutes)
- Then: finish with a variant of DSEM